
Speed up drop database #6623

Merged 1 commit on May 13, 2016

Conversation

@jwilder (Contributor) commented May 13, 2016:

Required for all non-trivial PRs
  • Rebased/mergeable
  • Tests pass
  • CHANGELOG.md updated

Drop database was closing and deleting each shard dir individually and
serially. It would then delete the empty database dirs.

This changes drop database to close all shards in parallel and run a
single os.RemoveAll to remove everything under the db dir, which is
more efficient.

It also reworks the locking to avoid holding the tsdb.Store lock for
long periods of time, which could block queries and writes for other
databases as well.
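
For illustration, here is a minimal Go sketch of the new shape of the operation; the Store and Shard types are simplified stand-ins, and the per-database shard filtering is omitted, so this is not the actual tsdb code:

```go
package tsdb

import (
	"os"
	"path/filepath"
	"sync"
)

// Shard is a stand-in for a tsdb shard; only Close matters here.
type Shard struct{}

func (sh *Shard) Close() error { return nil }

// Store is a simplified stand-in for tsdb.Store.
type Store struct {
	mu     sync.RWMutex
	path   string
	shards map[uint64]*Shard
}

func (s *Store) DeleteDatabase(name string) error {
	// Snapshot the shards under a short read lock so queries and writes
	// to other databases are not blocked while shards close.
	// (Filtering to only the shards of `name` is omitted for brevity.)
	s.mu.RLock()
	shards := make([]*Shard, 0, len(s.shards))
	for _, sh := range s.shards {
		shards = append(shards, sh)
	}
	s.mu.RUnlock()

	// Close all shards in parallel, collecting any errors.
	responses := make(chan error, len(shards))
	var wg sync.WaitGroup
	for _, sh := range shards {
		wg.Add(1)
		go func(sh *Shard) {
			defer wg.Done()
			responses <- sh.Close()
		}(sh)
	}
	wg.Wait()
	close(responses)

	for err := range responses {
		if err != nil {
			return err
		}
	}

	// One os.RemoveAll on the database directory replaces the old
	// per-shard directory deletes.
	return os.RemoveAll(filepath.Join(s.path, name))
}
```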

@mention-bot commented:

By analyzing the blame information on this pull request, we identified @e-dard to be a potential reviewer.

@jwilder force-pushed the jw-drop-database branch from 16fa5ae to 9e54adc on May 13, 2016 16:26
@jwilder added this to the 1.0.0 milestone on May 13, 2016
Review thread on this hunk of the diff:

```go
	}
	}
	s.mu.RUnlock()   // release the store's read lock before blocking
	wg.Wait()        // wait for every shard-close goroutine to finish
	close(responses) // signal that no more close errors will arrive
```
@e-dard (Contributor) commented May 13, 2016:
You could further improve this by getting rid of the WaitGroup and counting how many responses you have received in your for loop further down. Once you have len(s.shards) responses, you can close the channel and exit the loop.

This has the added benefit that you don't have to wait for all the shards to attempt to close before finding out that one of them failed early on.
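
A rough sketch of that alternative, reusing the illustrative Shard stand-in from the sketch above (a hypothetical rework, not code from the PR):

```go
// closeShards closes every shard in parallel and returns on the first error,
// counting responses instead of using a sync.WaitGroup.
func closeShards(shards []*Shard) error {
	// Buffered so the remaining goroutines never block if we return early.
	responses := make(chan error, len(shards))
	for _, sh := range shards {
		go func(sh *Shard) { responses <- sh.Close() }(sh)
	}
	for i := 0; i < len(shards); i++ {
		if err := <-responses; err != nil {
			return err // surface the first failure without waiting for the rest
		}
	}
	return nil
}
```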

@e-dard (Contributor) followed up:
Oh, thinking about it, I guess you would probably want to carry on deleting the shards after an error anyway, right? So then the WaitGroup does make sense.

@jwilder (Contributor, Author) replied:
Yes. I wanted to have them all try and either complete or fail before returning an error.

@corylanou (Contributor):

+1
